
    Place Recognition by Per-Location Classifiers

    Place recognition is formulated as the task of finding the location where a query image was captured. This is an important task with many practical applications in robotics, autonomous driving, augmented reality, 3D reconstruction, and systems that organize imagery in a geographically structured manner. Place recognition is typically done by finding a reference image in a large, structured, geo-referenced database. In this work, we first address the problem of building a geo-referenced dataset for place recognition. We describe a framework for building the dataset from the street-side imagery of Google Street View, which provides panoramic views from positions along many streets in cities and rural areas worldwide. Besides downloading the panoramic views and transforming them into sets of perspective images, the framework is capable of retrieving the underlying scene depth information. Second, we aim at localizing a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold: (i) we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database, in a similar manner to per-exemplar SVMs in object recognition, and (ii) as only a few positive training examples are available for each location, we propose two methods to calibrate all the per-location SVM classifiers without the need for additional positive training data. The first method relies on p-values from statistical hypothesis testing and uses only the available negative training data. The second method performs an affine calibration by appropriately normalizing the learned classifier hyperplane and does not need any additional labeled training data.
We test the proposed place recognition method with the bag-of-visual-words and Fisher vector image representations suitable for large-scale indexing. Experiments are performed on three datasets: 25,000 and 55,000 geotagged street view images of Pittsburgh, and the 24/7 Tokyo benchmark containing 76,000 images with varying illumination conditions. The results show improved place recognition accuracy of the learned image representation over direct matching of raw image descriptors.
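The two calibration ideas in the abstract can be sketched in a few lines. This is only an illustrative sketch, not the paper's implementation: the ridge-regression solver stands in for per-exemplar linear SVM training, and all variable names are assumptions.

```python
import numpy as np

def train_per_location(pos, negs, lam=1e-2):
    # Ridge-regression stand-in for per-exemplar SVM training: one
    # positive descriptor (the location's image) against many negatives.
    X = np.vstack([pos[None, :], negs])
    y = np.array([1.0] + [-1.0] * len(negs))
    Xa = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    wb = np.linalg.solve(Xa.T @ Xa + lam * np.eye(Xa.shape[1]), Xa.T @ y)
    return wb[:-1], wb[-1]  # hyperplane w and bias b

def affine_calibrate(w, b):
    # Affine calibration: rescale the hyperplane to unit norm so that
    # scores of different per-location classifiers become comparable,
    # without any additional labeled training data.
    n = np.linalg.norm(w)
    return w / n, b / n

def rank_locations(query, classifiers):
    # Score the query with every calibrated classifier; best first.
    scores = np.array([w @ query + b for w, b in classifiers])
    return np.argsort(-scores)
```

The point of the calibration step is that each classifier is trained independently, so raw scores from different hyperplanes live on different scales; normalizing each hyperplane puts them on a common footing before ranking.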

    High-Quality Vector Field and Direct Jacobian Estimation

    In this paper, a correlation method for direct vorticity computation of the fluid velocity field is introduced. The correlation method allows an affine transformation of the correlation window. It is shown that the Lagrangian field of displacements, considering the linear terms only, satisfies the requirements for an affine transformation, and that using Normalized Cross-Correlation (NCC) together with the affine transformation and the Newton-Raphson iterative method, the six parameters defining this transformation can be estimated. Using these parameters, both displacement and vorticity can be computed directly, without differentiation of the vector field.
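The similarity score at the heart of this family of methods is standard normalized cross-correlation. A minimal sketch (window extraction and the affine warp of the window are omitted; `f` and `g` are assumed to be equally sized image patches):

```python
import numpy as np

def ncc(f, g):
    # Normalized cross-correlation of two equally sized windows:
    # subtract the means, then correlate and normalize by the energies.
    f = f - f.mean()
    g = g - g.mean()
    denom = np.sqrt((f * f).sum() * (g * g).sum())
    return float((f * g).sum() / denom)
```

NCC is invariant to affine changes of intensity, which is why it is a robust choice for matching windows between frames: `ncc(a, 2*a + 3)` evaluates to 1 for any non-constant window `a`.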

    High-Quality Vector Field and Direct Vorticity Estimation Using the Affine Correlation Method

    We present a correlation method for direct vorticity computation of the fluid velocity field. The correlation method allows an affine transformation of the correlation window. It is shown that the Lagrangian field of displacements is locally affine when only the linear terms are considered. Using Normalized Cross-Correlation (NCC) combined with the affine transformation and the Newton-Raphson iterative method, the six parameters defining this transformation are estimated. Subsequently, both displacement and vorticity can be computed using these parameters; hence, these properties are obtained for every single pixel directly during the correlation procedure.

    Affine Correlation Method for Direct Vorticity, Skew and Displacement Estimation

    In this paper, a correlation method for direct vorticity computation of the fluid velocity field is introduced. The correlation method considers an affine transformation of the correlation window. It is shown that the Lagrangian field of displacements, considering the linear terms only, satisfies the requirements for an affine transformation. Using Normalized Cross-Correlation together with the affine transformation and the Newton-Raphson iterative method, the six parameters defining the transformation can be estimated. Subsequently, both displacement and vorticity can be computed using these parameters; hence, the vorticity is obtained for every single point directly during the correlation procedure.
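The "direct" part of the method is that vorticity falls out of the six affine parameters without differentiating the vector field. A hedged sketch: the least-squares fit below is only a stand-in for the NCC + Newton-Raphson estimation described in the abstracts, and serves to show what the six parameters represent.

```python
import numpy as np

def fit_affine_displacement(x, y, u, v):
    # Fit u = u0 + a11*x + a12*y and v = v0 + a21*x + a22*y to a window
    # of displacements. In the papers these six parameters come out of
    # the NCC + Newton-Raphson iteration; the least-squares fit here is
    # only an illustrative substitute.
    A = np.column_stack([np.ones_like(x), x, y])
    pu = np.linalg.lstsq(A, u, rcond=None)[0]  # (u0, a11, a12)
    pv = np.linalg.lstsq(A, v, rcond=None)[0]  # (v0, a21, a22)
    return pu, pv

def vorticity(pu, pv):
    # Out-of-plane vorticity directly from the linear terms:
    # omega = dv/dx - du/dy = a21 - a12, no finite differencing needed.
    return pv[1] - pu[2]
```

As a sanity check, a solid-body rotation with displacement field u = -0.5*y, v = 0.5*x has dv/dx = 0.5 and du/dy = -0.5, so the recovered vorticity is 1.0.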

    Learning and calibrating per-location classifiers for visual place recognition

    The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is twofold. First, we cast the place recognition problem as a classification task and use the available geo-tags to train a classifier for each location in the database, in a similar manner to per-exemplar SVMs in object recognition. Second, as only one or a few positive training examples are available for each location, we propose two methods to calibrate all the per-location SVM classifiers without the need for additional positive training data. The first method relies on p-values from statistical hypothesis testing and uses only the available negative training data. The second method performs an affine calibration by appropriately normalizing the learnt classifier hyperplane and does not need any additional labelled training data. We test the proposed place recognition method with the bag-of-visual-words and Fisher vector image representations suitable for large-scale indexing. Experiments are performed on three datasets: 25,000 and 55,000 geotagged street view images of Pittsburgh, and the 24/7 Tokyo benchmark containing 76,000 images with varying illumination conditions. The results show improved place recognition accuracy of the learnt image representation over direct matching of raw image descriptors.
    [Fig. 1: The goal of this work is to localize a query photograph (top left) by finding other images of the same place in a large geotagged image database (right column). We cast the problem as a classification task and learn a classifier for each location in the database. We develop two procedures to calibrate the outputs of the large number of per-location classifiers without the need for additional labeled training data. Panels: query image, retrieved locations, calibrated e-SVM classifiers.]
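The first calibration method, which needs only negative data, can be sketched as an empirical p-value under the negative-score distribution. This is an illustrative sketch: the add-one smoothing is my choice, not necessarily the paper's exact estimator.

```python
import numpy as np

def pvalue_calibrate(score, neg_scores):
    # Empirical p-value of a raw classifier score under the negative
    # score distribution: the fraction of negatives scoring at least as
    # high (with add-one smoothing -- an assumed detail). A lower
    # p-value means a more confident detection, and p-values from
    # different per-location classifiers are directly comparable,
    # which is exactly what ranking candidate locations requires.
    neg_sorted = np.sort(np.asarray(neg_scores))
    n_above = len(neg_sorted) - np.searchsorted(neg_sorted, score, side="left")
    return (n_above + 1) / (len(neg_sorted) + 1)
```

For example, with 99 negative scores all below the query's score, the calibrated p-value is (0 + 1) / (99 + 1) = 0.01; a query score below all negatives gives 1.0.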